GAP/MPI: Facilitating parallelism

Author

  • Gene Cooperman
Abstract

The goal of this work is to overcome the learning barriers faced when first using parallelism. Currently, in order to parallelize a system such as GAP, one must embed a message-passing library such as MPI, with its many routines and many parameters. GAP/MPI provides a simple, task-oriented interface sitting above the MPI library. The system presents the end user with a single SPMD (single program, multiple data) environment in GAP: an existing, familiar interactive language. In GAP/MPI one describes the end application in terms of high-level tasks, each invoked by a single procedure call. This eliminates the complexities of a message-passing library, such as encoding a message in a suitable data structure, message synchronization, communication topologies, and deadlock avoidance.
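As a point of reference, the sketch below shows the kind of raw MPI plumbing, written in C, that a single task-oriented call is meant to replace. The master/worker pattern, tags, and buffer layout here are illustrative assumptions, not GAP/MPI internals.

    /* Minimal sketch (not GAP/MPI's actual internals) of raw MPI
     * master/worker plumbing: explicit buffer encoding, tags, ranks,
     * and matching receives, all hidden by a task-oriented interface. */
    #include <mpi.h>
    #include <stdio.h>

    #define TAG_TASK   1
    #define TAG_RESULT 2

    int main(int argc, char **argv) {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (rank == 0) {                 /* master: send tasks, collect results */
            for (int w = 1; w < size; w++) {
                int task = 100 + w;      /* stand-in for an encoded task message */
                MPI_Send(&task, 1, MPI_INT, w, TAG_TASK, MPI_COMM_WORLD);
            }
            for (int w = 1; w < size; w++) {
                int result;
                MPI_Recv(&result, 1, MPI_INT, w, TAG_RESULT, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                printf("result from worker %d: %d\n", w, result);
            }
        } else {                         /* worker: receive, compute, reply */
            int task;
            MPI_Recv(&task, 1, MPI_INT, 0, TAG_TASK, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            int result = task * task;    /* stand-in for the real computation */
            MPI_Send(&result, 1, MPI_INT, 0, TAG_RESULT, MPI_COMM_WORLD);
        }
        MPI_Finalize();
        return 0;
    }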

Similar resources

Verifying Parallel Programs with MPI-Spin

Standard testing and debugging techniques are notoriously ineffective when applied to parallel programs, due to the numerous sources of nondeterminism arising from parallelism. MPI-Spin, an extension of the model checker Spin for verifying and debugging MPI-based parallel programs, overcomes many of the limitations associated with the standard techniques. By exploring all possible executions of...
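As an illustration of the nondeterminism the abstract refers to, the C fragment below uses wildcard receives (MPI_ANY_SOURCE), so its observable behavior depends on message arrival order. It is a hypothetical example, not code from the MPI-Spin paper.

    /* Schedule-dependent MPI fragment of the kind model checking is
     * designed to explore exhaustively: the order in which the two
     * sends match the wildcard receives is nondeterministic, so ad-hoc
     * testing may never exhibit both outcomes.
     * Run with at least three processes. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int rank;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            int first, second;
            MPI_Recv(&first, 1, MPI_INT, MPI_ANY_SOURCE, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);          /* wildcard receive */
            MPI_Recv(&second, 1, MPI_INT, MPI_ANY_SOURCE, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("arrival order: %d then %d\n", first, second);
        } else if (rank == 1 || rank == 2) {
            MPI_Send(&rank, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
        }
        MPI_Finalize();
        return 0;
    }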

Noncollective Communicator Creation in MPI

MPI communicators abstract communication operations across application modules, facilitating seamless composition of different libraries. In addition, communicators provide the ability to form groups of processes and establish multiple levels of parallelism. Traditionally, communicators have been collectively created in the context of the parent communicator. The recent thrust toward systems at...
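For concreteness, the sketch below shows the traditional, collective style of communicator creation using the standard MPI_Comm_split call: every process in the parent communicator must participate, which is exactly the restriction a noncollective creation mechanism relaxes. The even/odd grouping rule is an arbitrary choice for illustration.

    /* Collective communicator creation: all processes in the parent
     * communicator (here MPI_COMM_WORLD) must make the call, even those
     * interested in only one of the resulting subgroups. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        int world_rank, sub_rank;
        MPI_Comm subcomm;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

        int color = world_rank % 2;      /* two subgroups: evens and odds */
        MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &subcomm);

        MPI_Comm_rank(subcomm, &sub_rank);
        printf("world rank %d -> subgroup %d, rank %d\n",
               world_rank, color, sub_rank);

        MPI_Comm_free(&subcomm);
        MPI_Finalize();
        return 0;
    }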

MXNET-MPI: Embedding MPI parallelism in Parameter Server Task Model for scaling Deep Learning

Existing Deep Learning frameworks exclusively use either the Parameter Server (PS) approach or MPI parallelism. In this paper, we discuss the drawbacks of such approaches and propose a generic framework supporting both PS and MPI programming paradigms, co-existing at the same time. The key advantage of the new model is to embed the scaling benefits of MPI parallelism into the loosely coupled PS task...
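The C sketch below illustrates the generic MPI side of such scaling: a single MPI_Allreduce that averages per-worker gradients in one collective step, replacing many point-to-point PS exchanges. It is a minimal sketch of data-parallel gradient synchronization under stated assumptions, not MXNET-MPI code.

    /* Generic data-parallel synchronization step: each worker holds
     * local gradients; one MPI_Allreduce sums them across all workers,
     * and a division by the worker count yields the average. */
    #include <mpi.h>

    #define NPARAMS 4

    int main(int argc, char **argv) {
        int size;
        double grad[NPARAMS] = {0.1, 0.2, 0.3, 0.4};  /* stand-in gradients */

        MPI_Init(&argc, &argv);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Sum gradients from all workers in place, then average. */
        MPI_Allreduce(MPI_IN_PLACE, grad, NPARAMS, MPI_DOUBLE, MPI_SUM,
                      MPI_COMM_WORLD);
        for (int i = 0; i < NPARAMS; i++)
            grad[i] /= size;

        MPI_Finalize();
        return 0;
    }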

Communicating data-parallel tasks: an MPI library for HPF

High Performance Fortran (HPF) has emerged as a standard dialect of Fortran for data-parallel computing. However, HPF does not support task parallelism or heterogeneous computing adequately. This paper presents a summary of our work on a library-based approach to support task parallelism, using MPI as a coordination layer for HPF. This library enables a wide variety of applications, such as mul...
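The paper's library itself is not shown here, but the generic mechanism MPI offers for coordinating separate data-parallel tasks can be sketched: split the processes into per-task groups, then join the groups with an intercommunicator. Group sizes and the tag below are arbitrary illustrative choices, not the paper's design.

    /* Generic coordination-layer sketch: two process groups, each able
     * to run its own data-parallel task internally, connected by an
     * intercommunicator for task-to-task communication.
     * Run with at least two processes. */
    #include <mpi.h>

    int main(int argc, char **argv) {
        int world_rank, world_size;
        MPI_Comm task_comm, inter_comm;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
        MPI_Comm_size(MPI_COMM_WORLD, &world_size);

        /* First half of the ranks form task 0, second half task 1. */
        int task = (world_rank < world_size / 2) ? 0 : 1;
        MPI_Comm_split(MPI_COMM_WORLD, task, world_rank, &task_comm);

        /* Leaders are world ranks 0 and world_size/2; the
         * intercommunicator links the two data-parallel tasks. */
        int remote_leader = (task == 0) ? world_size / 2 : 0;
        MPI_Intercomm_create(task_comm, 0, MPI_COMM_WORLD, remote_leader,
                             99 /* tag */, &inter_comm);

        MPI_Comm_free(&inter_comm);
        MPI_Comm_free(&task_comm);
        MPI_Finalize();
        return 0;
    }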

Overlapping communication and computation with OpenMP and MPI

Machines comprising a distributed collection of shared-memory or SMP nodes are becoming common for parallel computing. OpenMP can be combined with MPI on many such machines. Motivations for combining OpenMP and MPI are discussed. While OpenMP is typically used for exploiting loop-level parallelism, it can also be used to enable coarse-grain parallelism, potentially leading to less overhead. We s...
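A common shape of this overlap is sketched below under generic assumptions (a ring halo exchange and a stand-in compute loop, not the paper's benchmark): post nonblocking MPI transfers from the main thread, run an OpenMP-parallel compute phase while the messages are in flight, then wait.

    /* Hybrid MPI+OpenMP overlap: nonblocking halo exchange proceeds
     * while OpenMP threads compute on the interior. */
    #include <mpi.h>
    #include <omp.h>

    #define N 1000000

    static double halo_out[2], halo_in[2], work[N];

    int main(int argc, char **argv) {
        int provided, rank, size;
        MPI_Request reqs[2];

        /* FUNNELED: only the main thread makes MPI calls. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int right = (rank + 1) % size;
        int left  = (rank + size - 1) % size;

        /* Start the halo exchange without blocking. */
        MPI_Irecv(&halo_in[0],  1, MPI_DOUBLE, left,  0, MPI_COMM_WORLD, &reqs[0]);
        MPI_Isend(&halo_out[0], 1, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &reqs[1]);

        /* Interior computation runs in OpenMP threads while the
         * communication is in flight. */
        #pragma omp parallel for
        for (long i = 0; i < N; i++)
            work[i] = work[i] * 0.5 + 1.0;

        MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

        MPI_Finalize();
        return 0;
    }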


Journal:

Volume   Issue

Pages  -

Publication date: 1995